Towards a Maximum Entropy Method for Estimating HMM Parameters

Authors

  • Christian J. Walder
  • Peter J. Kootsookos
  • Brian C. Lovell
Abstract

Training a Hidden Markov Model (HMM) to maximise the probability of a given sequence can result in over-fitting. That is, the model represents the training sequence well, but fails to generalise. In this paper, we present a possible solution to this problem, which is to maximise a linear combination of the likelihood of the training data, and the entropy of the model. We derive the necessary equations for gradient based maximisation of this combined term. The performance of the system is then evaluated in comparison with three other algorithms, on a classification task using synthetic data. The results indicate that the method is potentially useful. The main problem with the method is the computational intractability of the entropy calculation.
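The combined objective described in the abstract lends itself to a simple gradient-based sketch. The code below is a rough, hypothetical illustration only: it fits a discrete-observation HMM by gradient ascent on log P(O | θ) + λ·H, where H is replaced by a crude surrogate (the sum of Shannon entropies of the initial, transition and emission distributions) rather than the entropy term derived in the paper, and the gradient is taken by finite differences. All function names, the value of λ, and the toy data are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: gradient ascent on log-likelihood + lambda * entropy surrogate.
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def unpack(params, n_states, n_symbols):
    """Map unconstrained logits to stochastic HMM parameters (pi, A, B)."""
    k = n_states
    pi = softmax(params[:k])
    A = softmax(params[k:k + k * k].reshape(k, k), axis=1)
    B = softmax(params[k + k * k:].reshape(k, n_symbols), axis=1)
    return pi, A, B

def log_likelihood(obs, pi, A, B):
    """Forward algorithm with per-step normalisation (scaled alphas)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

def entropy_surrogate(pi, A, B):
    """Sum of row entropies -- a crude stand-in for the model entropy in the paper."""
    def H(p):
        return -(p * np.log(p + 1e-12)).sum()
    return H(pi) + sum(H(row) for row in A) + sum(H(row) for row in B)

def objective(params, obs, n_states, n_symbols, lam):
    pi, A, B = unpack(params, n_states, n_symbols)
    return log_likelihood(obs, pi, A, B) + lam * entropy_surrogate(pi, A, B)

def fit(obs, n_states=3, n_symbols=4, lam=0.5, lr=0.05, steps=500, eps=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    params = rng.normal(scale=0.1, size=n_states + n_states**2 + n_states * n_symbols)
    for _ in range(steps):
        # Finite-difference gradient of the combined objective (slow, but dependency-free).
        grad = np.zeros_like(params)
        f0 = objective(params, obs, n_states, n_symbols, lam)
        for i in range(params.size):
            p = params.copy()
            p[i] += eps
            grad[i] = (objective(p, obs, n_states, n_symbols, lam) - f0) / eps
        params += lr * grad  # gradient *ascent* on likelihood + entropy
    return unpack(params, n_states, n_symbols)

if __name__ == "__main__":
    obs = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 0, 1, 2])  # toy training sequence
    pi, A, B = fit(obs)
    print("pi:", np.round(pi, 3))
```

The softmax re-parameterisation keeps the rows stochastic without explicit constraints, which is one common way to turn the problem into unconstrained gradient-based maximisation.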


Related articles

Feature dimension reduction using reduced-rank maximum likelihood estimation for hidden Markov models

This paper presents a new method of feature dimension reduction in hidden Markov modeling (HMM) for speech recognition. The key idea is to apply reduced rank maximum likelihood estimation in the M-step of the usual Baum-Welch algorithm for estimating HMM parameters such that the estimates of the Gaussian distribution parameters are restricted in a sub-space of reduced dimensionality. There are ...
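The snippet above is truncated, but the key constraint it describes, M-step Gaussian means restricted to a reduced-dimensional subspace, can be illustrated roughly as follows. This is a hypothetical sketch under assumed names and an SVD-based projection, not the algorithm of the cited paper.

```python
import numpy as np

def reduced_rank_means(X, gamma, rank):
    """
    X:     (T, d) feature vectors
    gamma: (T, k) state posteriors from the E-step
    rank:  dimensionality of the subspace the state means are restricted to
    Returns (k, d) means constrained to a rank-`rank` affine subspace.
    """
    # Ordinary M-step means (posterior-weighted averages).
    counts = gamma.sum(axis=0)                  # (k,)
    means = (gamma.T @ X) / counts[:, None]     # (k, d)
    # Restrict the means to a low-dimensional subspace around their grand mean.
    grand = means.mean(axis=0)
    centred = means - grand
    # SVD gives the best rank-`rank` subspace in the least-squares sense.
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    basis = Vt[:rank]                           # (rank, d)
    return grand + (centred @ basis.T) @ basis  # project back onto the subspace

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    gamma = rng.dirichlet(np.ones(4), size=200)
    print(reduced_rank_means(X, gamma, rank=2).shape)  # (4, 10)
```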


Entropy-Based Parameter Estimation for the Four-Parameter Exponential Gamma Distribution

Two methods based on the principle of maximum entropy (POME), the ordinary entropy method (ENT) and the parameter space expansion method (PSEM), are developed for estimating the parameters of a four-parameter exponential gamma distribution. Using six data sets for annual precipitation at the Weihe River basin in China, the PSEM was applied for estimating parameters for the four-parameter expone...


Speech enhancement based on hidden Markov model using sparse code shrinkage

This paper presents a new hidden Markov model-based (HMM-based) speech enhancement framework based on independent component analysis (ICA). We propose analytical procedures for training clean speech and noise models by the Baum re-estimation algorithm and present a maximum a posteriori (MAP) estimator based on a Laplace-Gaussian combination (for clean speech and noise, respectively) in the HMM ...
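In the simplest scalar case, a MAP estimate under a Laplacian (sparse) prior on the clean coefficient and Gaussian observation noise reduces to soft thresholding, which is the flavour of shrinkage the snippet alludes to. The sketch below shows only that closed form; the names and constants are assumptions, and it is not the cited paper's full HMM framework.

```python
import numpy as np

def map_shrinkage(y, noise_var, laplace_scale):
    """
    MAP estimate of a Laplacian-distributed source observed in Gaussian noise:
    argmax_x  -|x|/b - (y - x)^2 / (2*sigma^2)  =>  soft thresholding of y.
    """
    threshold = noise_var / laplace_scale
    return np.sign(y) * np.maximum(np.abs(y) - threshold, 0.0)

if __name__ == "__main__":
    y = np.array([-2.0, -0.1, 0.05, 0.8, 3.0])   # noisy coefficients
    print(map_shrinkage(y, noise_var=0.25, laplace_scale=1.0))
```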


Hidden Markov Model for Speech Recognition

In this paper, a theoretical framework for Bayesian adaptive training of the parameters of the discrete hidden Markov model (DHMM) and of the semi-continuous HMM (SCHMM) with Gaussian mixture state observation densities is presented. In addition to formulating the forward-backward MAP (maximum a posteriori) and the segmental MAP algorithms for estimating the above HMM parameters, a computationally effi...
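For the discrete-HMM case, MAP estimation of the kind mentioned above typically places a Dirichlet prior on each row of the transition or emission matrix, so the re-estimate combines expected counts from the forward-backward pass with the prior hyperparameters. A minimal sketch, assuming all hyperparameters exceed one so the posterior mode is interior; the function name and toy numbers are assumptions, not the cited paper's algorithm.

```python
import numpy as np

def map_row_estimate(expected_counts, alpha):
    """
    MAP estimate of one row of transition/emission probabilities under a
    Dirichlet(alpha) prior: the posterior mode (counts + alpha - 1) / sum(...).
    """
    num = expected_counts + alpha - 1.0
    return num / num.sum()

if __name__ == "__main__":
    counts = np.array([5.2, 0.3, 1.5])   # expected counts from forward-backward
    alpha = np.array([2.0, 2.0, 2.0])    # prior hyperparameters (all > 1)
    print(map_row_estimate(counts, alpha))
```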


A novel discriminative method for HMM in automatic speech recognition

A novel discriminative method for estimating the parameters of Hidden Markov Models (HMMs) is described. In this method, the parameter values are chosen so that the characteristics of the different sound classes are maximally separated. Compared with the well-known Maximum Mutual Information (MMI) estimation method, the method presented in this paper adopts a new kind of cr...
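For context, the MMI baseline the snippet compares against maximises the posterior probability of the correct class under the competing class-conditional HMMs. The sketch below evaluates that criterion for a single sequence from per-class log-likelihoods; it illustrates the MMI comparison point rather than the paper's new criterion, and all names and numbers are assumptions.

```python
import numpy as np

def mmi_criterion(class_loglik, log_prior, correct):
    """
    MMI objective for one training sequence:
    log P(correct | O) = log p(O|correct) + log P(correct) - log sum_c p(O|c) P(c).
    """
    joint = class_loglik + log_prior
    return joint[correct] - np.logaddexp.reduce(joint)

if __name__ == "__main__":
    loglik = np.array([-120.0, -118.5, -125.0])  # per-class HMM log-likelihoods
    prior = np.log(np.full(3, 1 / 3))            # uniform class priors
    print(mmi_criterion(loglik, prior, correct=1))
```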




Publication year: 2003